A topological data analysis library.
Core algorithm written in Scala, using Apache Spark.
Executed in a Jupyter notebook, using the Apache Toree kernel and declarative widgets.
Graphs rendered with Sigma/Linkurious, wrapped in a Polymer component.
In [1]:
%AddDeps org.apache.spark spark-mllib_2.10 1.6.2 --repository file:/Users/tmo/.m2/repository
%AddDeps org.scalanlp breeze-natives_2.10 0.12 --repository file:/Users/tmo/.m2/repository
%AddDeps com.github.haifengl smile-core 1.1.0 --transitive --repository file:/Users/tmo/.m2/repository
%AddDeps io.reactivex rxscala_2.10 0.26.1 --transitive --repository file:/Users/tmo/.m2/repository
%AddDeps com.softwaremill.quicklens quicklens_2.10 1.4.4 --repository file:/Users/tmo/.m2/repository
%AddDeps com.chuusai shapeless_2.10 2.3.0 --repository https://oss.sonatype.org/content/repositories/releases/ --repository file:/Users/tmo/.m2/repository
%AddDeps org.tmoerman plongeur-spark_2.10 0.3.22 --repository file:/Users/tmo/.m2/repository
In [2]:
%AddJar http://localhost:8888/nbextensions/declarativewidgets/declarativewidgets.jar
In [4]:
import rx.lang.scala.{Observer, Subscription, Observable}
import rx.lang.scala.subjects.PublishSubject
import shapeless.HNil
import org.tmoerman.plongeur.tda._
import org.tmoerman.plongeur.tda.Model._
import org.tmoerman.plongeur.tda.cluster.Clustering._
import org.tmoerman.plongeur.tda.cluster.Scale._
import org.tmoerman.plongeur.ui.Controls._
import declarativewidgets._
initWidgets
import declarativewidgets.WidgetChannels.channel
In [6]:
import java.util.concurrent.atomic.AtomicReference
case class SubRef(ref: AtomicReference[Option[Subscription]] = new AtomicReference[Option[Subscription]](None)) extends Serializable {

  // Atomically swap in the new subscription and unsubscribe the previous one, if any.
  def update(sub: Subscription): Unit = ref.getAndSet(Option(sub)).foreach(_.unsubscribe())

  // Clear the reference; Option(null) is None, so the current subscription is unsubscribed.
  def reset(): Unit = update(null)
}
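Re-evaluating a cell swaps in a fresh subscription and cleanly unsubscribes the previous one. A minimal sketch of the intended usage (the Observables here are placeholders):
In [ ]:
val demoRef = SubRef()

// first evaluation: store a subscription
demoRef.update(Observable.just(1, 2, 3).subscribe(println(_)))

// re-evaluation: the previous subscription is unsubscribed, the new one stored
demoRef.update(Observable.just(4, 5, 6).subscribe(println(_)))

// tear down entirely
demoRef.reset()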
In [7]:
%%html
<link rel='import' href='urth_components/paper-slider/paper-slider.html'
is='urth-core-import' package='PolymerElements/paper-slider'>
<link rel='import' href='urth_components/paper-button/paper-button.html'
      is='urth-core-import' package='PolymerElements/paper-button'>
<link rel='import' href='urth_components/paper-toggle-button/paper-toggle-button.html'
      is='urth-core-import' package='PolymerElements/paper-toggle-button'>
<link rel='import' href='urth_components/plongeur-graph/plongeur-graph.html'
is='urth-core-import' package='tmoerman/plongeur-graph'>
<link rel='import' href='urth_components/urth-viz-scatter/urth-viz-scatter.html' is='urth-core-import'>
Out[7]:
Keep references to the Rx subscriptions, so they can be unsubscribed when a cell is re-evaluated.
In [6]:
val in$_subRef = SubRef()
Instantiate a PublishSubject. This stream of TDAParams instances represents the input of a TDAMachine. We subscribe to it so that every incoming TDAParams instance is also published, as a String, to the channel "ch_TDA_1" under the "params" key. The SubRef stored above takes care of unsubscribing the previous subscription when this cell is re-evaluated.
In [7]:
val in$ = PublishSubject[TDAParams]
in$_subRef.update(in$.subscribe(p => channel("ch_TDA_1").set("params", p.toString)))
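A PublishSubject is both an Observer and an Observable: everything pushed in via onNext is forwarded to all current subscribers. A minimal sketch of the mechanics (names are illustrative):
In [ ]:
val demo$ = PublishSubject[String]()

val demoSub = demo$.subscribe(v => println(s"received: $v"))

demo$.onNext("hello")  // prints: received: hello

demoSub.unsubscribe()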
Create an initial TDAParams instance. In the same cell, we submit the instance to the PublishSubject.
In [59]:
val tdaParams =
TDAParams(
lens = TDALens(
Filter("PCA" :: 0 :: HNil, 16, 0.45),
Filter("PCA" :: 1 :: HNil, 16, 0.45)),
clusteringParams = ClusteringParams(),
scaleSelection = histogram(12))
in$.onNext(tdaParams)
In [9]:
import org.apache.spark.rdd.RDD
import org.apache.commons.lang.StringUtils.trim
import org.apache.spark.mllib.linalg.Vectors
// Parse a CSV file of dense numeric rows into indexed data points.
def readDenseData(file: String) =
  sc.
    textFile(file).
    map(_.split(",").map(trim)).
    zipWithIndex.
    map{ case (a, idx) => dp(idx, Vectors.dense(a.map(_.toDouble))) }

// Parse the MNIST CSV: the first column is the digit category, the remaining
// columns are pixel values. Only non-zero pixels are kept, in a sparse vector.
def readMnist(file: String): RDD[DataPoint] =
  sc.
    textFile(file).
    map(s => {
      val columns = s.split(",").map(trim).toList

      columns match {
        case cat :: rawFeatures =>
          val nonZero =
            rawFeatures.
              map(_.toInt).
              zipWithIndex.
              filter{ case (v, idx) => v != 0 }.
              map{ case (v, idx) => (idx, v.toDouble) }

          val sparseFeatures = Vectors.sparse(rawFeatures.size, nonZero)

          (cat, sparseFeatures)
      }}).
    zipWithIndex.
    map{ case ((cat, features), idx) => IndexedDataPoint(idx.toInt, features, Some(Map("cat" -> cat))) }
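To make the sparse encoding concrete, here is a hypothetical five-pixel row worked through the same steps:
In [ ]:
// Hypothetical row: digit "5" followed by five pixel values, mostly zero.
val cat :: rawFeatures = "5,0,0,128,0,255".split(",").map(trim).toList

// Non-zero pixels keyed by index: List((2, 128.0), (4, 255.0))
val nonZero =
  rawFeatures.
    map(_.toInt).
    zipWithIndex.
    filter{ case (v, idx) => v != 0 }.
    map{ case (v, idx) => (idx, v.toDouble) }

// A length-5 sparse vector holding only the two non-zero entries.
Vectors.sparse(rawFeatures.size, nonZero)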
In [10]:
val mnist_path = "/Users/tmo/Work/batiskav/projects/plongeur/scala/plongeur-spark/src/test/resources/mnist/"
val mnist_train = mnist_path + "mnist_train.csv"
In [11]:
val mnistRDD = readMnist(mnist_train)
In [12]:
val mnistSample5pctRDD = mnistRDD.sample(false, 0.05, 0L).cache
In [13]:
mnistSample5pctRDD.count
Out[13]:
In [14]:
val ctx = TDAContext(sc, mnistSample5pctRDD)
Turn a TDAResult into the nested Map of Sigma-style nodes and edges that the plongeur-graph element expects.
In [25]:
val r = scala.util.Random

// Convert a TDAResult into a Map of Sigma-style nodes and edges.
// Nodes get random initial coordinates for the renderer.
def format(result: TDAResult) = Map(
  "nodes" -> result.clusters.map(c =>
    Map(
      "id"    -> c.id.toString,
      "label" -> c.id.toString,
      "size"  -> c.dataPoints.size,
      "x"     -> r.nextInt(100),
      "y"     -> r.nextInt(100))),

  "edges" -> result.edges.map(e => {
    val (from, to) = e.toArray match { case Array(f, t) => (f, t) }

    Map(
      "id"     -> s"$from--$to",
      "source" -> from.toString,
      "target" -> to.toString)}))
Run the machine, obtaining an Observable of (TDAParams, TDAResult) pairs.
In [18]:
val out$: Observable[(TDAParams, TDAResult)] = TDAMachine.run(ctx, in$)
In [19]:
val out$_subRef = SubRef()
In [26]:
out$_subRef.update(
  out$.subscribe(
    onNext = (t) => t match { case (p, r) => channel("ch_TDA_1").set("result", format(r)) },
    onError = (e) => println(s"Error in TDA machine: $e")))
In [21]:
val pipe$_subRef = SubRef()
val nrBins$ = PublishSubject[Int]
val overlap$ = PublishSubject[Percentage]
val scaleBins$ = PublishSubject[Int]
val collapse$ = PublishSubject[Boolean]
In [22]:
channel("ch_TDA_1").watch("nrBins", (_: Any, v: Int) => nrBins$.onNext(v))
channel("ch_TDA_1").watch("overlap", (_: Any, v: Int) => overlap$.onNext(BigDecimal(v) / 100))
channel("ch_TDA_1").watch("scaleBins", (_: Any, v: Int) => scaleBins$.onNext(v))
channel("ch_TDA_1").watch("collapse", (_: Any, v: Boolean) => collapse$.onNext(v))
In [23]:
import TDAParams._

// Base parameters: every update function below produces a modified copy of these.
val BASE =
  TDAParams(
    lens = TDALens(
      Filter("PCA" :: 0 :: HNil, 50, 0.5)),
    clusteringParams = ClusteringParams(),
    scaleSelection = histogram(50),
    collapseDuplicateClusters = false)

// Map each widget stream to a TDAParams => TDAParams update function,
// merge them into a single stream, and fold the updates over BASE.
val params$ =
  List(
    nrBins$.map(v => setFilterNrBins(0, v)),
    overlap$.map(v => setFilterOverlap(0, v)),
    scaleBins$.map(v => setHistogramScaleSelectionNrBins(v)),
    collapse$.map(v => (params: TDAParams) => params.copy(collapseDuplicateClusters = v))).
  reduce(_ merge _).
  scan(BASE)((params, fn) => fn(params))

// Feed the resulting parameter stream into the machine's input subject.
pipe$_subRef.update(params$.subscribe(in$))

// Initialize the widget channel with the BASE values.
channel("ch_TDA_1").set("nrBins", BASE.lens.filters(0).nrBins)
channel("ch_TDA_1").set("overlap", (BASE.lens.filters(0).overlap * 100).toInt)
channel("ch_TDA_1").set("scaleBins", 50)
channel("ch_TDA_1").set("collapse", BASE.collapseDuplicateClusters)
We create three slider widgets and a toggle button that provide the inputs for the nrBins$, overlap$, scaleBins$ and collapse$ Observables.
In [28]:
%%html
<template is='urth-core-bind' channel='ch_TDA_1'>
<table class="clean">
<tr class="title">
<th>Filter:</th>
<th colspan="2" class="code">
"PCA" :: 0 :: HNil
</th>
</tr>
<tr>
<th>nr of cover bins</th>
<td class="wide">
<paper-slider min="0" max="100" step="1" value="{{nrBins}}"></paper-slider>
</td>
<td>[[nrBins]]</td>
</tr>
<tr>
<th>overlap</th>
<td>
<paper-slider min="0" max="75" step="1" value="{{overlap}}"></paper-slider>
</td>
<td>[[overlap]]%</td>
</tr>
<tr>
<th>nr of scale bins</th>
<td>
<paper-slider min="5" max="150" step="1" value="{{scaleBins}}"></paper-slider>
</td>
<td>[[scaleBins]]</td>
</tr>
<tr>
<th>collapse duplicates</th>
<td colspan="2">
<paper-toggle-button checked="{{collapse}}"></paper-toggle-button>
</td>
</tr>
</table>
</template>
Out[28]:
In [27]:
%%html
<style>
table.clean th {
border-style: hidden;
white-space: nowrap;
}
table.clean td {
border-style: hidden;
}
tr.title {
text-align: center;
background-color: beige;
}
td.wide {
width: 500px;
}
td.wide paper-slider {
width: 100%;
}
th.code {
font-family: courier new,monospace;
}
</style>
Out[27]:
In [23]:
%%html
<template is='urth-core-bind' channel='ch_TDA_1'>
<plongeur-graph data="{{result}}"></plongeur-graph>
</template>
Out[23]:
In [24]:
%%html
<template is='urth-core-bind' channel='ch_TDA_1'>
<div style='background: #FFB; padding: 10px;'>
<span style='font-family: "Courier"'>[[params]]</span>
</div>
</template>
Out[24]: